Singapore launches world’s first AI testing framework

Invites companies to pilot and contribute to international standards development.

Singapore launches A.I. Verify – the world’s first AI Governance Testing Framework and Toolkit for companies that want to demonstrate responsible AI in an objective and verifiable manner. This was announced by Singapore’s Minister for Communications and Information Josephine Teo at the World Economic Forum Annual Meeting in Davos. A.I. Verify, currently a Minimum Viable Product (MVP), aims to promote transparency between companies and their stakeholders through a combination of technical tests and process checks.

Globally, testing for the trustworthiness of AI systems is an emerging field. As more companies use AI in their products and services, fostering public trust in AI technologies remains key to unlocking the transformative opportunities of AI.

Singapore remains at the forefront of international discourse on AI ethics

The launch of A.I. Verify follows Singapore’s launch of the Model AI Governance Framework (second edition) in Davos in 2020, and the National AI Strategy in November 2019. With practical, detailed guidance on implementing responsible AI already provided to industry, A.I. Verify is Singapore’s next step in helping companies be more transparent about their AI products and services and build trust with their stakeholders. A.I. Verify was developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC).

Objective and verifiable testing process for industry

Developers and owners can verify the claimed performance of their AI systems against a set of principles through standardised tests. A.I. Verify packages a set of open-source testing solutions, together with a set of process checks, into a Toolkit for convenient self-assessment. The Toolkit will generate reports for developers, management, and business partners, covering major areas affecting AI performance. The approach is designed to provide transparency about what the AI model claims to do vis-à-vis the test results, and covers areas such as:

  1. Transparency:
    1. On the use of AI to achieve a stated outcome
    2. Understanding how the AI model reaches a decision
    3. Whether the decisions predicted by the AI show unintended bias
  2. Safety and resilience of AI system.
  3. Accountability and oversight of AI systems.

The Pilot Testing Framework and Toolkit:

  1. Allows AI system developers/owners to conduct self-testing – to maintain commercial requirements while providing a common basis to declare results.
  2. Does not define ethical standards. It validates AI system developer’s/owner’s claims about the approach, use, and verified performance of their AI systems.

  3. Does not, however, guarantee that any AI system tested under this Pilot Framework will be free from risks or biases, or is completely safe.

Minister for Communications and Information Josephine Teo said, “A.I. Verify is another step forward in Singapore’s AI development. In developing the world’s first product to demonstrate responsible AI in an objective and verifiable manner, we aim to help businesses become more transparent to their stakeholders in their use of AI. This will, in turn, promote greater public trust towards the use of AI. We invite industry partners from all around the world to join us in this pilot and contribute to building international standards in AI governance.”

Also commenting on the launch, Chia Song Hwee, Deputy CEO, Temasek International and member of Singapore’s Advisory Council on the Ethical Use of AI and Data said, “I would like to congratulate IMDA for taking responsible AI to the next milestone with the launch of this testing framework and toolkit. Rapid digitisation has led to a proliferation of data and improved algorithms. As companies across sectors continue to innovate, this toolkit will enable them to turn concepts of responsible and trustworthy AI into practical applications that will benefit all stakeholders, from business owners to end users.”

Building interoperability of trustworthy AI with partners and industry

Developed under the guidance of the Advisory Council on the Ethical Use of AI and Data, the MVP has already been tested and/or received feedback from 10 companies from different sectors and of different scales. These companies are AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, NCS (part of Singtel Group)/Land Transport Authority, Standard Chartered Bank, UCARE.AI, and X0PA.AI.

In addition, Singapore is engaging other like-minded countries and partners to enhance the interoperability of AI governance frameworks and to develop international standards on AI such as through Singapore’s participation in ISO/IEC JTC1/SC 42 on Artificial Intelligence. IMDA is also working together with the U.S. Department of Commerce to build interoperable AI governance frameworks. Beyond the pilot stage of the MVP, Singapore aims to work with AI system owners/developers globally to collate and build industry benchmarks. This enables Singapore to continue contributing to the development of international standards on AI governance.

Organisations invited to participate in pilot

As AI governance and testing are nascent fields, Singapore welcomes organisations to participate in piloting the MVP. Companies participating in the pilot will have the unique opportunity to:

  1. Gain early access to the MVP and use it to conduct self-testing on their AI systems/models.
  2. Use MVP-generated reports to demonstrate transparency and build trust with their stakeholders.
  3. Help shape an internationally applicable MVP to reflect industry needs and contribute to international standards development.

 
